Taming Silent Failures: A Framework for Verifiable AI Reliability
Abstract--The integration of Artificial Intelligence (AI) into safety-critical systems introduces a new reliability paradigm: silent failures, where AI produces confident but incorrect outputs that can be dangerous. This paper introduces the Formal Assurance and Monitoring Environment (FAME), a novel framework that confronts this challenge. FAME synergizes the mathematical rigor of offline formal synthesis with the vigilance of online runtime monitoring to create a verifiable safety net around opaque AI components. We demonstrate its efficacy in an autonomous vehicle perception system, where FAME successfully detected 93.5% of critical safety violations that were otherwise silent. By contextualizing our framework within the ISO 26262 and ISO/PAS 8800 standards, we provide reliability engineers with a practical, certifiable pathway for deploying trustworthy AI. FAME represents a crucial shift from accepting probabilistic performance to enforcing provable safety in next-generation systems.

From driver assistance to computer-aided diagnosis (CAD), data-driven components promise superhuman perception and decision support. Yet they also introduce a reliability problem that differs from classical, code-centric software engineering: silent failure, i.e., confident outputs that are wrong, with no explicit crash, exception, or error code exposed to the rest of the stack [1], [2]. Traditional safety-critical software is developed under rigorous processes (requirements traceability, design assurance, redundancy, and diagnostics) and can exhibit multiple failure modes (e.g., fail-silent, latent, Byzantine), which are analyzed and mitigated through established standards and verification activities. In contrast, the correctness of learning-enabled components depends on data distributions as much as on code, and can degrade under distribution shift, sensor faults, or occlusions without tripping conventional diagnostics [1].
Standard testing is insufficient, as the input space of production DNNs is hyper-dimensional and cannot be exhaustively exercised [3].
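The runtime-monitoring half of this idea can be sketched in a few lines: an independent monitor checks each AI output against a formally specified safety invariant and rejects confident-but-implausible results. All names, thresholds, and the time-to-collision invariant below are illustrative assumptions, not FAME's actual interface.

```python
# Minimal sketch of a runtime safety monitor for perception outputs.
# The Detection type, the 0.9 confidence cutoff, and the 2 s
# time-to-collision threshold are all assumed for illustration.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float
    distance_m: float  # estimated distance to the detected object

def violates_safety_invariant(det: Detection, ego_speed_mps: float) -> bool:
    """Flag a potential silent failure: a high-confidence detection whose
    implied time-to-collision falls below the assumed safety threshold."""
    if det.distance_m <= 0:
        return True  # physically implausible output
    ttc = det.distance_m / max(ego_speed_mps, 1e-6)
    return det.confidence > 0.9 and ttc < 2.0

def monitor(detections, ego_speed_mps):
    """Return the subset of perception outputs the monitor rejects."""
    return [d for d in detections if violates_safety_invariant(d, ego_speed_mps)]

rejected = monitor(
    [Detection("pedestrian", 0.95, 8.0), Detection("car", 0.99, 60.0)],
    ego_speed_mps=15.0,
)
```

The pedestrian detection is rejected (time-to-collision of roughly 0.53 s at 15 m/s), while the distant car passes; in a FAME-style pipeline such a rejection would trigger a fallback rather than silently propagating downstream.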
AI Safety Assurance in Electric Vehicles: A Case Study on AI-Driven SOC Estimation
Skoglund, Martin, Warg, Fredrik, Mirzai, Aria, Thorsen, Anders, Lundgren, Karl, Folkesson, Peter, Havers-Zulka, Bastian
Integrating Artificial Intelligence (AI) technology in electric vehicles (EV) introduces unique challenges for safety assurance, particularly within the framework of ISO 26262, which governs functional safety in the automotive domain. Traditional assessment methodologies are not geared toward evaluating AI-based functions and require evolving standards and practices. This paper explores how an independent assessment of an AI component in an EV can be achieved when combining ISO 26262 with the recently released ISO/PAS 8800, whose scope is AI safety for road vehicles. The AI-driven State of Charge (SOC) battery estimation exemplifies the process. Key features relevant to the independent assessment of this extended evaluation approach are identified. As part of the evaluation, robustness testing of the AI component is conducted using fault injection experiments, wherein perturbed sensor inputs are systematically introduced to assess the component's resilience to input variance.
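The fault-injection experiments described above can be sketched as a sweep over fault modes applied to one sensor channel, measuring how far the estimate drifts from nominal. The stand-in estimator, fault modes, and magnitudes below are assumptions for illustration only, not the paper's actual setup.

```python
# Illustrative fault-injection loop for robustness testing of an SOC
# estimator. The affine "estimator" and fault parameters are invented.
import random

def soc_estimate(voltage_v, current_a, temp_c):
    # Stand-in estimator: a simple affine map of cell voltage to [0, 1];
    # not a real battery model.
    return max(0.0, min(1.0, 0.25 * (voltage_v - 3.0)))

def inject_fault(value, mode, rng):
    """Perturb a sensor reading according to an assumed fault model."""
    if mode == "offset":
        return value + 0.2            # constant sensor bias
    if mode == "noise":
        return value + rng.gauss(0.0, 0.05)  # additive Gaussian noise
    if mode == "stuck":
        return 3.7                    # stuck-at-value fault
    return value

def robustness_sweep(samples, modes):
    """Worst-case SOC error per fault mode over the sample set."""
    rng = random.Random(0)  # fixed seed for reproducibility
    worst = {}
    for mode in modes:
        errs = []
        for v, i, t in samples:
            nominal = soc_estimate(v, i, t)
            faulty = soc_estimate(inject_fault(v, mode, rng), i, t)
            errs.append(abs(faulty - nominal))
        worst[mode] = max(errs)
    return worst

samples = [(3.6, -1.0, 25.0), (3.9, -0.5, 25.0), (4.1, 0.0, 30.0)]
report = robustness_sweep(samples, ["offset", "noise", "stuck"])
```

A resilience criterion could then be expressed as a bound on each entry of `report`, turning "the component tolerates input variance" into a checkable pass/fail condition.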
DURA-CPS: A Multi-Role Orchestrator for Dependability Assurance in LLM-Enabled Cyber-Physical Systems
Srinivasan, Trisanth, Patapati, Santosh, Musku, Himani, Gode, Idhant, Arora, Aditya, Bhattacharya, Samvit, Nazriev, Abubakr, Hirave, Sanika, Kanjiani, Zaryab, Ghose, Srinjoy
Cyber-Physical Systems (CPS) increasingly depend on advanced AI techniques to operate in critical applications. However, traditional verification and validation methods often struggle to handle the unpredictable and dynamic nature of AI components. In this paper, we introduce DURA-CPS, a novel framework that employs multi-role orchestration to automate the iterative assurance process for AI-powered CPS. By assigning specialized roles (e.g., safety monitoring, security assessment, fault injection, and recovery planning) to dedicated agents within a simulated environment, DURA-CPS continuously evaluates and refines AI behavior against a range of dependability requirements. We demonstrate the framework through a case study involving an autonomous vehicle navigating an intersection with an AI-based planner. Our results show that DURA-CPS effectively detects vulnerabilities, manages performance impacts, and supports adaptive recovery strategies, thereby offering a structured and extensible solution for rigorous V&V in safety- and security-critical systems.
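The multi-role orchestration idea can be illustrated as a set of role functions that each inspect a candidate plan and report findings, with an orchestrator aggregating them. The role names, the plan schema, and the checks below are illustrative assumptions, not DURA-CPS's actual agents or API.

```python
# Sketch of multi-role assurance orchestration: each role returns a list
# of findings for a candidate plan; an empty aggregate means it passes.
# All field names and checks are assumed for illustration.

def safety_monitor(plan):
    issues = []
    if plan["speed_mps"] > plan["speed_limit_mps"]:
        issues.append("speed limit exceeded")
    return issues

def security_assessor(plan):
    return [] if plan.get("authenticated") else ["unauthenticated command source"]

def fault_injector(plan):
    # Probe behavior under an injected degradation (dropped lidar frames).
    degraded = dict(plan, lidar_ok=False)
    return [] if degraded.get("fallback") else ["no fallback under sensor loss"]

ROLES = [safety_monitor, security_assessor, fault_injector]

def assure(plan):
    """Run every role against the plan and aggregate their findings."""
    findings = []
    for role in ROLES:
        findings.extend(role(plan))
    return findings

findings = assure({"speed_mps": 12.0, "speed_limit_mps": 10.0,
                   "authenticated": True, "fallback": True})
```

In an iterative loop, non-empty findings would be fed back to the planner (or a recovery-planning role) and the plan re-evaluated, which is the "continuously evaluates and refines" cycle the abstract describes.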
On the Variability of AI-based Software Systems Due to Environment Configurations
Rahman, Musfiqur, Khatoonabadi, SayedHassan, Abdellatif, Ahmad, Samaana, Haya, Shihab, Emad
Software systems are inherently complex. In addition, any ML model is, at its core, probabilistic in nature and hence suffers from the challenge of uncertainty [2, 3, 4]. The complexity of a software system combined with the non-deterministic nature of an ML model can introduce variability: the phenomenon where a piece of software behaves differently when the development or runtime environment changes, even though the internal software artifacts such as code and input data are exactly the same. In practice, development and deployment environments are very likely to differ; hence, understanding how an ML model may behave differently after deployment compared to how it behaved in the development environment is a crucial aspect of AI-based software development. For example, an arbitrary face recognition system achieving an F1-score of, say, 0.9 in the development environment does not guarantee that it will on average achieve a similar F1-score once deployed in a different environment configuration.
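The F1-score comparison in the example can be made concrete: evaluate the same model artifact in both environments and report the gap as a variability measure. The counts below are invented purely to illustrate the arithmetic.

```python
# Quantifying environment-induced variability as the gap between
# F1-scores of the *same* model under two environment configurations.
# The tp/fp/fn counts are invented for illustration.

def f1(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

dev_f1 = f1(tp=90, fp=10, fn=10)   # development environment
prod_f1 = f1(tp=82, fp=18, fn=18)  # deployed environment configuration
variability = abs(dev_f1 - prod_f1)
```

Here the identical artifact scores 0.90 in development but 0.82 after deployment, giving a variability of 0.08, exactly the kind of silent degradation the paragraph warns about.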
SoK: On the Semantic AI Security in Autonomous Driving
Shen, Junjie, Wang, Ningfei, Wan, Ziwen, Luo, Yunpeng, Sato, Takami, Hu, Zhisheng, Zhang, Xinyang, Guo, Shengjian, Zhong, Zhenyu, Li, Kang, Zhao, Ziming, Qiao, Chunming, Chen, Qi Alfred
Autonomous Driving (AD) systems rely on AI components to make safe and correct driving decisions. Unfortunately, today's AI algorithms are known to be generally vulnerable to adversarial attacks. However, for such AI component-level vulnerabilities to be semantically impactful at the system level, an attack needs to bridge non-trivial semantic gaps both (1) from the system-level attack input spaces to those at the AI component level, and (2) from AI component-level attack impacts to those at the system level. In this paper, we define such research space as semantic AI security, as opposed to generic AI security. Over the past 5 years, a growing number of research works have tackled such semantic AI security challenges in the AD context, showing an exponential growth trend. In this paper, we perform the first systematization of knowledge of this growing semantic AD AI security research space. In total, we collect and analyze 53 such papers and systematically taxonomize them based on research aspects critical for the security field. We summarize the 6 most substantial scientific gaps observed based on quantitative comparisons, both vertically among existing AD AI security works and horizontally with security works from closely related domains. With these, we are able to provide insights and potential future directions not only at the design level, but also at the research goal, methodology, and community levels. To address the most critical scientific methodology-level gap, we take the initiative to develop an open-source, uniform, and extensible system-driven evaluation platform, named PASS, for the semantic AD AI security research community. We also use our implemented platform prototype to showcase the capabilities and benefits of such a platform using representative semantic AD AI attacks.
Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness
Llorca, David Fernández, Hamon, Ronan, Junklewitz, Henrik, Grosse, Kathrin, Kunze, Lars, Seiniger, Patrick, Swaim, Robert, Reed, Nick, Alahi, Alexandre, Gómez, Emilia, Sánchez, Ignacio, Kriston, Akos
Artificial Intelligence (AI) plays a critical role in the advancement of autonomous driving. It is likely the main facilitator of high levels of automation, as there are certain technical issues that only seem to be resolvable through advanced AI systems, particularly those based on machine learning. However, the introduction of AI systems in the realm of driver assistance systems and automated driving systems creates new uncertainties due to specific characteristics of AI that make it a distinct technology from traditional systems developed in the field of motor vehicles. Some of these characteristics include unpredictability, opacity, self and continuous learning and lack of causality [1], among other horizontal features such as autonomy, complexity, overfitting and bias. As an example of the specificity that the introduction of AI systems in vehicles entails, the UNECE's Working Party on Automated/Autonomous and Connected Vehicles (GRVA) has been specifically discussing the impact of AI on vehicle regulations since 2020 [2].
A Cybersecurity Risk Analysis Framework for Systems with Artificial Intelligence Components
Camacho, Jose Manuel, Couce-Vieira, Aitor, Arroyo, David, Insua, David Rios
The introduction of the European Union Artificial Intelligence Act, the NIST Artificial Intelligence Risk Management Framework, and related norms demands a better understanding and implementation of novel risk analysis approaches to evaluate systems with Artificial Intelligence components. This paper provides a cybersecurity risk analysis framework that can help assessing such systems. We use an illustrative example concerning automated driving systems.
Towards Modelling and Verification of Social Explainable AI
Kurpiewski, Damian, Jamroga, Wojciech, Sidoruk, Teofil
Social Explainable AI (SAI) is a new direction in artificial intelligence that emphasises decentralisation, transparency, social context, and focus on the human users. SAI research is still at an early stage. Consequently, it concentrates on delivering the intended functionalities, but largely ignores the possibility of unwelcome behaviours due to malicious or erroneous activity. We propose that, in order to capture the breadth of relevant aspects, one can use models and logics of strategic ability, that have been developed in multi-agent systems. Using the STV model checker, we take the first step towards the formal modelling and verification of SAI environments, in particular of their resistance to various types of attacks by compromised AI modules.
What, Indeed, is an Achievable Provable Guarantee for Learning-Enabled Safety Critical Systems
Bensalem, Saddek, Cheng, Chih-Hong, Huang, Wei, Huang, Xiaowei, Wu, Changshun, Zhao, Xingyu
Machine learning has made remarkable advancements, but confidently utilising learning-enabled components in safety-critical domains still poses challenges. Among these challenges, achieving safety guarantees in a rigorous yet practical way is one of the most prominent. In this paper, we first discuss the engineering and research challenges associated with the design and verification of such systems. Then, based on the observation that existing works cannot actually achieve provable guarantees, we promote a two-step verification method for the ultimate achievement of provable statistical guarantees.
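One generic way to turn finite testing into a provable statistical guarantee is a concentration bound: from n i.i.d. test runs, derive a high-confidence upper bound on the true failure probability. The Hoeffding-based construction below is a standard textbook device chosen here for illustration, not the two-step method the paper itself proposes.

```python
# Statistical guarantee from i.i.d. testing: with probability >= confidence,
# the true failure rate lies below the empirical rate plus the Hoeffding
# margin sqrt(ln(1/delta) / (2n)), where delta = 1 - confidence.
import math

def failure_rate_upper_bound(failures, n, confidence=0.99):
    """Upper confidence bound on the failure probability of a
    learning-enabled component, from n independent test runs."""
    delta = 1.0 - confidence
    empirical = failures / n
    return empirical + math.sqrt(math.log(1.0 / delta) / (2.0 * n))

# Even with zero observed failures, 10,000 runs only certify a failure
# rate below about 1.5% at 99% confidence.
bound = failure_rate_upper_bound(failures=0, n=10_000, confidence=0.99)
```

The example makes the paper's motivation concrete: guarantees of this kind are only statistical, and the achievable bound is governed by how much testing is feasible, not by the component's code alone.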
A Graphical Modeling Language for Artificial Intelligence Applications in Automation Systems
Schieseck, Marvin, Topalis, Philip, Fay, Alexander
Artificial Intelligence (AI) applications in automation systems are usually distributed systems whose development and integration involve several experts. Each expert uses their own domain-specific modeling language and tools to model the system elements. An interdisciplinary graphical modeling language that enables the modeling of an AI application as an overall system comprehensible to all disciplines does not yet exist. As a result, there is often a lack of interdisciplinary system understanding, leading to increased development, integration, and maintenance efforts. This paper therefore presents a graphical modeling language that enables consistent and understandable modeling of AI applications in automation systems at the system level. This makes it possible to subdivide the overall system into domain-specific subsystems and thus reduce these efforts.